
    Multiobjective Optimization to Optimal Moroccan Diet Using Genetic Algorithm

    Proper glucose control is designed to prevent or delay the complications of diabetes. Various contexts can cause the blood sugar level to fluctuate to a greater or lesser extent: eating habits, treatment, intense physical activity, etc. The diet problem formulated as a minimum-cost program is well known in the literature. The main goal of this paper is to introduce a constrained multiobjective programming model for the diet problem with two objective functions: the first is the total glycemic load of the diet, while the second is its cost. A multiobjective genetic algorithm (MOGA) was used to solve the proposed model. The experimental results show that our system (the proposed model solved by MOGA) is able to produce adequate diets that balance glycemic load and cost while respecting the patient's requirements.
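
    A minimal Python sketch of the two-objective idea is given below. It is not the paper's MOGA: the per-100 g glycemic-load and cost values are invented placeholders, the dietary constraints are omitted, and the variation scheme is deliberately simple; it only shows how portion vectors can be evolved while keeping the non-dominated (glycemic load, cost) front.

        import random

        # Hypothetical per-100 g data: (glycemic load, cost); real values would
        # come from a food-composition table, which is not reproduced here.
        FOODS = {"bread": (10.0, 0.5), "lentils": (5.0, 1.2),
                 "chicken": (0.0, 4.0), "orange": (4.0, 1.0)}
        NAMES = list(FOODS)

        def objectives(portions):
            """Total glycemic load and total cost of a diet (100 g units)."""
            gl = sum(p * FOODS[n][0] for p, n in zip(portions, NAMES))
            cost = sum(p * FOODS[n][1] for p, n in zip(portions, NAMES))
            return gl, cost

        def dominates(a, b):
            """Pareto dominance for minimization of both objectives."""
            return all(x <= y for x, y in zip(a, b)) and a != b

        def mutate(p):
            return [max(0.0, x + random.gauss(0, 0.3)) for x in p]

        pop = [[random.uniform(0, 3) for _ in NAMES] for _ in range(60)]
        for _ in range(100):                                   # generations
            pop += [mutate(random.choice(pop)) for _ in range(60)]
            scored = [(objectives(p), p) for p in pop]
            front = [p for s, p in scored                      # non-dominated set
                     if not any(dominates(t, s) for t, _ in scored)]
            pop = (front + random.sample(pop, len(pop)))[:60]

        for p in pop[:5]:
            print([round(x, 2) for x in p], objectives(p))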

    Intelligent Local Search Optimization Methods to Optimal Morocco Regime

    In this paper, we compare three well-known swarm algorithms on the optimal-regime problem based on our recently introduced mathematical optimization model. The parameters of this model are estimated from 176 foods whose nutrient values are given per 100 g. The daily nutrient needs are estimated based on expert knowledge. Experiments were run for different configurations of the considered swarm algorithms. Compared to Stochastic Fractal Search (SFS) and Particle Swarm Optimization (PSO), the Firefly Algorithm (FA) produces the most suitable regimes.
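
    For readers unfamiliar with the Firefly Algorithm, the following Python sketch shows the textbook FA update (each firefly moves toward every brighter one, with attractiveness decaying with distance) on a toy quadratic objective; the objective, dimensions, and parameter values are placeholders, not those of the regime model.

        import math, random

        def cost(x):
            """Toy stand-in for the regime objective: distance to a target."""
            return sum(v * v for v in x)

        def firefly(n=25, dim=4, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
            X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if cost(X[j]) < cost(X[i]):            # j is brighter
                            r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                            beta = beta0 * math.exp(-gamma * r2)
                            X[i] = [a + beta * (b - a)
                                    + alpha * (random.random() - 0.5)
                                    for a, b in zip(X[i], X[j])]
            return min(X, key=cost)

        best = firefly()
        print(best, cost(best))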

    Rapid Localization and Mapping Method Based on Adaptive Particle Filters

    With the development of autonomous vehicles, localization and mapping technologies have become crucial to equip the vehicle with the appropriate knowledge for its operation. In this paper, we extend our previous work by proposing a localization and mapping architecture for autonomous vehicles that does not rely on GPS, particularly in environments such as tunnels, under bridges, urban canyons, and dense tree canopies. The proposed approach has two parts. First, a K-means algorithm is employed to extract features from LiDAR scans and build a local map of each scan; the local maps are then concatenated into a global map of the environment, which facilitates data association between frames. Second, the main localization task is performed by an adaptive particle filter that works in four steps: (a) generation of particles around an initial state (provided by the GPS); (b) updating the particle positions with the motion (translation and rotation) of the vehicle measured by an inertial measurement unit; (c) selection of the best candidate particles by comparing, at each timestamp, the match rate (also called particle weight) of the local map (with the real-time distances to objects) against the distances of the particles to the corresponding chunks of the global map; and (d) averaging the selected particles to derive the estimated position and, finally, resampling the particles to ensure the reliability of the position estimation. The performance of the proposed technique is investigated on different sequences of the KITTI and Pandaset raw data with different environmental setups, weather conditions, and seasonal changes. The obtained results validate the performance of the proposed approach, in terms of speed and representativeness of the extracted features, for real-time localization in comparison with other state-of-the-art methods.
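
    The four steps (a)-(d) can be illustrated on a toy one-dimensional problem. The Python sketch below is a generic bootstrap particle filter, not the paper's implementation: the landmark "map", noise levels, and motion are invented, and the adaptive and map-chunking machinery is left out.

        import numpy as np

        rng = np.random.default_rng(0)
        landmarks = np.array([2.0, 7.0, 12.0])     # stand-in global map

        def scan(x):
            """Distances from position x to the landmarks (toy LiDAR scan)."""
            return np.abs(landmarks - x)

        true_x = 0.0
        particles = 0.2 + rng.normal(0, 0.5, 500)  # (a) spawn around a GPS fix
        for _ in range(20):
            imu_dx = 0.5                           # (b) motion from the IMU
            true_x += imu_dx
            particles += imu_dx + rng.normal(0, 0.05, 500)
            z = scan(true_x) + rng.normal(0, 0.1, 3)
            pred = np.abs(landmarks[None, :] - particles[:, None])
            w = np.exp(-((pred - z) ** 2).sum(axis=1) / 0.5)   # (c) match rate
            w /= w.sum()
            est = (w * particles).sum()            # (d) weighted average ...
            particles = particles[rng.choice(500, 500, p=w)]   # ... and resample
        print(f"true={true_x:.2f} estimated={est:.2f}")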

    Opt-RNN-DBSVM: Optimal recurrent neural network density based support vector machine

    When implementing SVMs, two major problems are encountered: (a) the number of local minima increases exponentially with the number of samples, and (b) the computer storage required by a regular quadratic programming solver grows exponentially as the problem size expands. The Kernel-Adatron family of algorithms, which has been gaining attention lately, makes it possible to handle very large classification and regression problems. However, these methods treat different types of samples (noise, border, and core) in the same manner, which causes searches in unpromising areas and increases the number of iterations. In this work, we introduce a hybrid method to overcome these shortcomings, namely the Optimal Recurrent Neural Network Density-Based Support Vector Machine (Opt-RNN-DBSVM). This method consists of four steps: (a) characterization of the different samples, (b) elimination of samples with a low probability of being support vectors, (c) construction of an appropriate recurrent neural network based on an original energy function, and (d) solution of the system of differential equations governing the dynamics of the RNN using the Euler–Cauchy method with an optimal time step. Thanks to its recurrent architecture, the RNN remembers the regions explored during the search process. We demonstrate that RNN-SVM converges to feasible support vectors and that Opt-RNN-DBSVM has very low time complexity compared to RNN-SVM with a constant time step and to KAs-SVM. Several experiments were performed on academic datasets, using several classification performance measures to compare Opt-RNN-DBSVM with different classification methods; the results obtained show the good performance of the proposed method.
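
    Steps (a) and (b), characterizing samples and discarding those unlikely to be support vectors, can be approximated with a simple neighbourhood statistic. The Python sketch below scores each sample by the fraction of opposite-class points among its k nearest neighbours (core points score near zero) and keeps only potential border samples; the scoring rule and threshold are illustrative assumptions, not the paper's density criterion.

        import numpy as np

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(-2, 1, (100, 2)),    # toy two-class data
                       rng.normal(2, 1, (100, 2))])
        y = np.array([-1] * 100 + [1] * 100)

        def border_scores(X, y, k=10):
            """Fraction of opposite-class points among the k nearest
            neighbours: ~0 for core points, higher for border points."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            nn = np.argsort(d, axis=1)[:, 1:k + 1]     # skip self
            return (y[nn] != y[:, None]).mean(axis=1)

        s = border_scores(X, y)
        keep = s > 0.0          # drop samples with no chance of being SVs
        print(f"kept {keep.sum()} of {len(X)} samples for the solver")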

    OPT-RNN-DBSVM: OPTimal Recurrent Neural Network and Density-Based Support Vector Machine

    When implementing SVMs, two major problems are encountered: (a) the number of local minima of dual-SVM increases exponentially with the number of samples and (b) the computer memory required for a regular quadratic programming solver increases exponentially as the problem size expands. The Kernel-Adatron family of algorithms, gaining attention recently, has allowed us to handle very large classification and regression problems. However, these methods treat different types of samples (i.e., noise, border, and core) in the same manner, which makes these algorithms search in unpromising areas and increases the number of iterations as well. This paper introduces a hybrid method to overcome such shortcomings, called the Optimal Recurrent Neural Network and Density-Based Support Vector Machine (Opt-RNN-DBSVM). This method consists of four steps: (a) the characterization of different samples, (b) the elimination of samples with a low probability of being a support vector, (c) the construction of an appropriate recurrent neural network to solve the dual-DBSVM based on an original energy function, and (d) finding the solution to the system of differential equations that govern the dynamics of the RNN, using the Euler–Cauchy method involving an optimal time step. Density-based preprocessing reduces the number of local minima in the dual-SVM. The RNN's recurrent architecture avoids the need to explore recently visited areas. With the optimal time step, the search moves from the current vectors to the best neighboring support vectors. It is demonstrated that RNN-SVM converges to feasible support vectors and Opt-RNN-DBSVM has very low time complexity compared to the RNN-SVM with a constant time step and the Kernel-Adatron algorithm–SVM. Several classification performance measures are used to compare Opt-RNN-DBSVM with different classification methods, and the results obtained show the good performance of the proposed method.
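
    As a rough illustration of step (d), the sketch below integrates a gradient flow on the dual-SVM energy E(alpha) = (1/2) alpha' Q alpha - 1' alpha with Euler steps, choosing each step by exact line search along the gradient, which is one plausible reading of an "optimal time step". The bias/equality constraint is dropped (as in Kernel-Adatron-style formulations), the data are synthetic, and this is not the paper's RNN construction.

        import numpy as np

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
        y = np.array([-1.0] * 30 + [1.0] * 30)
        K = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=2) ** 2)
        Q = (y[:, None] * y[None, :]) * K              # dual-SVM Hessian
        C, alpha = 1.0, np.zeros(60)

        for _ in range(200):
            g = Q @ alpha - 1.0                        # gradient of the energy
            step = (g @ g) / max(g @ Q @ g, 1e-12)     # exact line-search step
            alpha = np.clip(alpha - step * g, 0.0, C)  # Euler step + projection

        print(f"{(alpha > 1e-6).sum()} support vectors")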

    Localisation and Mapping of Self-driving Vehicles based on Fuzzy K-means Clustering: A Non-semantic Approach

    Localisation and mapping are crucial for autonomous vehicles, as they inform the vehicle of where exactly it is in its environment as well as of the relevant infrastructure within that environment. This paper demonstrates the ability of non-semantic features to represent point clouds and to describe the environment. Our proposed architecture uses the Fuzzy K-means approach to extract features from LiDAR scans in order to reduce the feature map and guarantee that the features are identifiable in each frame. Global mapping is then done with a Gaussian Mixture Model (GMM), which facilitates data association between the frames to be mapped and helps the particle filter perform the localisation task accurately. The performance of the proposed technique is compared to other state-of-the-art methods over different sequences of the KITTI raw dataset with different environmental structures, weather conditions, and seasonal changes. The results obtained demonstrate the superiority of the proposed approach in terms of speed and representativeness of the features needed for real-time localisation. Moreover, we achieve competitive accuracy compared to other state-of-the-art methods.
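
    The feature-extraction stage can be sketched with a plain fuzzy C-means/K-means loop: soft memberships followed by a membership-weighted centroid update, compressing a scan into a few representative centers. The point cloud, cluster count, and fuzzifier below are placeholders, and the GMM global-mapping stage is not shown.

        import numpy as np

        rng = np.random.default_rng(3)
        points = rng.normal(0, 5, (2000, 2))   # stand-in for one LiDAR scan

        def fuzzy_kmeans(P, c=8, m=2.0, iters=30):
            """Plain fuzzy C-means: soft memberships, then a
            membership-weighted centroid update."""
            centers = P[rng.choice(len(P), c, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(P[:, None] - centers[None, :], axis=2) + 1e-9
                u = 1.0 / d ** (2 / (m - 1))           # unnormalized memberships
                u /= u.sum(axis=1, keepdims=True)
                w = u ** m
                centers = (w.T @ P) / w.sum(axis=0)[:, None]
            return centers

        features = fuzzy_kmeans(points)        # compact per-scan feature map
        print(features)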

    Multi-objectives optimization and convolution fuzzy C-means: control of diabetic population dynamic

    The optimal control models proposed in the literature to control a population of diabetics are all single-objective, which limits the identification of alternatives and potential opportunities for several reasons: the minimization of the total objective does not necessarily imply the minimization of its individual terms, and two patients from two different compartments may not tolerate the same intensity of exercise or the same severity of diet. In this work, we propose a multi-objective optimal control model for a population of diabetics that takes into account the specificity of each compartment, such that each objective function involves a single compartment and a single control. In addition, Pontryagin's maximum principle results in expensive control that devours all resources, because of the max-min operators, and its control formula is complex and difficult for diabetologists to assimilate. We therefore use a multi-objective heuristic method, NSGA-II, to estimate the optimal control based on our model. Since the objective functions are conflicting, we obtain the Pareto front formed by the non-dominated solutions, and we use fuzzy C-means to determine the main strategies based on a typical characterization. To limit human intervention during the control period, we use the convolution operator, with kernels of different sizes, to reduce hyper-fluctuations. Several experiments were conducted, and the proposed system highlights four feasible control strategies capable of mitigating socio-economic damage at a reasonable budget.
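
    The smoothing step is easy to picture: convolving a hyper-fluctuating control signal with averaging kernels of different sizes trades responsiveness for stability. The Python sketch below uses a synthetic signal and moving-average kernels; the real control signals come from the NSGA-II solutions.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.arange(365)
        # Hypothetical hyper-fluctuating daily control signal
        control = np.clip(0.5 + 0.3 * np.sin(t / 20)
                          + rng.normal(0, 0.25, t.size), 0, 1)

        for size in (7, 15, 31):               # kernels of different sizes
            kernel = np.ones(size) / size      # moving-average kernel
            smooth = np.convolve(control, kernel, mode="same")
            # total variation drops as the kernel widens
            print(size, round(float(np.abs(np.diff(smooth)).sum()), 2))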

    FP-Conv-CM: Fuzzy Probabilistic Convolution C-Means

    Soft computing models based on fuzzy or probabilistic approaches provide decision makers with the capabilities needed to deal with imprecise and incomplete information. Hybrid systems based on different soft computing approaches with complementary qualities and principles have also become popular. On the one hand, fuzzy logic makes its decisions on the basis of the degree of membership but gives no information on the frequency of an event; on the other hand, probability informs us of the frequency of the event but gives no information on the degree of membership to a set. In this work, we propose a new measure that implements both fuzzy and probabilistic notions (i.e., the degree of membership and the frequency) while exploiting the ability of the convolution operator to combine functions on continuous intervals. This measure evaluates both the degree of membership and the frequency of objects/events in the design of decision support systems. Using concrete examples, we show the drawbacks of fuzzy logic and of probability-based approaches taken separately, and we then show how a fuzzy-probabilistic convolution measure corrects these drawbacks. Based on this measure, we introduce a new clustering method named Fuzzy-Probabilistic-Convolution-C-Means (FP-Conv-CM). Fuzzy C-Means (FCM), Probabilistic K-Means (PKM), and FP-Conv-CM were tested on multiple datasets and compared on two performance measures, the Silhouette metric and Dunn's Index; FP-Conv-CM improves on both. In addition, FCM, PKM, and FP-Conv-CM were used for multiple image compression tasks and compared on three performance measures: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). The proposed FP-Conv-CM method shows improvements on all three measures as well.
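
    The paper defines its own fuzzy-probabilistic convolution measure; the sketch below is only a generic illustration of the underlying idea: discretizing a (hypothetical) triangular membership function and a Gaussian frequency profile on a grid and convolving them, so that the result reflects both degree of membership and frequency.

        import numpy as np

        x = np.linspace(0, 10, 1001)
        dx = x[1] - x[0]
        # Hypothetical triangular membership function ("around 5") ...
        mu = np.clip(1 - np.abs(x - 5) / 2, 0, 1)
        # ... and a frequency profile (a Gaussian density here)
        pdf = np.exp(-((x - 6) ** 2) / 2) / np.sqrt(2 * np.pi)

        # Convolving blends degree of membership with frequency of occurrence
        fp = np.convolve(mu, pdf, mode="same") * dx
        print(round(float(fp.max()), 3), round(float(x[fp.argmax()]), 2))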